Traditionally, data analysis and theory have been viewed as separate disciplines, each feeding into fundamentally different types of models. Modern deep learning technology is beginning to unify these two disciplines and will produce a new class of predictively powerful space weather models that combine the physical insights gained from data and theory. We call on NASA to invest in the research and infrastructure necessary for the heliophysics community to take advantage of these advances.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
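As a hedged illustration (not part of the paper), here is a minimal sketch of querying a released BLOOM checkpoint through the Hugging Face `transformers` API; `bigscience/bloom-560m` is assumed here as a smaller public variant so the example runs on modest hardware:

```python
# Minimal text generation with an open BLOOM checkpoint via `transformers`.
# "bigscience/bloom-560m" is a smaller public variant of the 176B model,
# chosen here only so the sketch runs without large-scale hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The BLOOM language model was trained on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```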
Estimating uncertainty is central to making scientific measurements in HEP: a measurement is useless without an estimate of its uncertainty. The goal of uncertainty quantification (UQ) is inextricably linked to the question: "How do we physically and statistically interpret these uncertainties?" The answer to this question depends not only on the computational task we want to perform, but also on the methods we use for that task. For artificial intelligence (AI) applications in HEP, interpretable UQ methods are crucial in several areas, including inference, simulation, and control/decision-making. Methods exist for each of these areas, but they have not yet been shown to be as trustworthy as the more traditional methods currently used in physics (e.g., non-AI frequentist and Bayesian methods). Shedding light on the question above requires a deeper understanding of the interplay between AI systems and uncertainty quantification. We briefly discuss the existing methods in each area and relate them to tasks across HEP. We then discuss recommended avenues for developing the techniques necessary for the reliable use of AI with UQ over the next decade.
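To make one widely used AI UQ heuristic in this space concrete, here is a toy sketch (not from the white paper) of a deep ensemble, where the spread across independently trained networks serves as a rough predictive uncertainty; the data and models below are illustrative stand-ins:

```python
# Deep-ensemble predictive uncertainty on a toy 1D regression task.
# The disagreement (std) across independently trained networks is a
# common heuristic proxy for epistemic uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # noisy 1D target

ensemble = [
    MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-4, 4, 50).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)  # ensemble spread
print(mean[:3], std[:3])
```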
The growing role of data science (DS) and machine learning (ML) in high-energy physics (HEP) is well established and pertinent, given their place at the heart of HEP research. Moreover, exploiting the symmetries inherent in physics data has inspired physics-informed ML as a vibrant sub-field of computer science research. HEP researchers benefit greatly from the materials widely available for education, training, and workforce development. They also contribute to these materials and provide software to DS/ML-related fields. Physics departments increasingly offer courses at the intersection of DS, ML, and physics, often using curricula developed by HEP researchers and involving the open software and data used in HEP. In this white paper, we explore the synergies between HEP research and DS/ML education, discuss the opportunities and challenges at this intersection, and propose community activities that will be mutually beneficial.
State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 1.6$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 1.3B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
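A minimal sketch of the FFT-based long convolution underlying efficient SSM layers such as H3 (illustrative only, not the authors' implementation; FlashConv blocks and fuses this computation, while the plain version below only shows the O(N log N) idea):

```python
# Causal long convolution via the FFT, the core primitive that lets SSM
# layers scale near-linearly in sequence length instead of quadratically.
import numpy as np

def fft_conv(u, k):
    """Causal convolution of input u with kernel k via the FFT."""
    n = u.shape[-1]
    fft_len = 2 * n  # zero-pad to avoid circular wrap-around
    u_f = np.fft.rfft(u, n=fft_len)
    k_f = np.fft.rfft(k, n=fft_len)
    return np.fft.irfft(u_f * k_f, n=fft_len)[..., :n]

u = np.random.randn(1024)            # input sequence
k = np.exp(-0.01 * np.arange(1024))  # decaying, SSM-style kernel
y = fft_conv(u, k)
assert np.allclose(y[:8], np.convolve(u, k)[:8])  # matches direct convolution
```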
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. Inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, covering 133 anatomical structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidney, and inter-connected kidney tumors. We show that UNesT consistently achieves state-of-the-art performance, and we evaluate its generalizability and data efficiency. In particular, the model performs whole-brain segmentation of the complete ROI with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 network tiles: it raises the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
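A toy sketch (not the authors' code) of the hierarchical aggregation idea: embeddings of spatially adjacent 3D patches are merged level by level into coarser tokens; the grid size, channel width, and 2x2x2 merge factor below are arbitrary illustrative choices:

```python
# Hierarchical aggregation of 3D patch tokens: each 2x2x2 spatial
# neighborhood of tokens is merged into a single coarser token.
import torch

def aggregate_neighbors(tokens, grid):
    """Merge each 2x2x2 neighborhood of patch tokens into one coarser token.

    tokens: (B, D*H*W, C) patch embeddings laid out on a (D, H, W) grid.
    """
    B, _, C = tokens.shape
    D, H, W = grid
    x = tokens.view(B, D, H, W, C)
    x = x.view(B, D // 2, 2, H // 2, 2, W // 2, 2, C)
    # Bring the 2x2x2 neighborhood dims next to the channel dim, then flatten.
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(B, -1, 8 * C)
    return x  # (B, D/2 * H/2 * W/2, 8C); a linear layer would project 8C -> C'

tokens = torch.randn(1, 8 * 8 * 8, 32)            # 512 patch tokens
coarse = aggregate_neighbors(tokens, (8, 8, 8))
print(coarse.shape)                               # torch.Size([1, 64, 256])
```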
The lack of insight into deep learning systems hinders their systematic design. In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque. Modeling replaces the complex system with a simpler surrogate that is more amenable to interpretation. Drawing inspiration from this, we construct a class of surrogate models for neural networks using Gaussian processes. Rather than deriving the kernels from some limiting case of neural networks, we learn the kernels of the Gaussian process empirically from the naturalistic behavior of neural networks. We first evaluate our approach with two case studies, inspired by prior theoretical studies of neural network behavior, in which we capture the preference of neural networks for learning low frequencies and identify pathological behavior in deep neural networks. In a further practical case study, we use the learned kernels to predict the generalization properties of neural networks.
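A toy sketch of the surrogate idea, assuming a simple 1D task in place of the paper's experiments: probe a trained network's input-output behavior and fit a Gaussian process to it, so that the kernel hyperparameters are learned empirically from that behavior:

```python
# Fit a GP surrogate to a trained network's behavior; the optimized RBF
# length scale then summarizes, e.g., the network's frequency preferences.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(128, 1))
y = np.sin(3 * X[:, 0])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X, y)

# Probe the trained network on a grid and fit the surrogate to its outputs;
# .fit() maximizes the marginal likelihood over kernel hyperparameters.
X_probe = np.linspace(-1, 1, 200).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp.fit(X_probe, net.predict(X_probe))
print(gp.kernel_)  # empirically learned kernel hyperparameters
```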
Soft robotics has the potential to transform robot locomotion; in particular, soft robotic swimmers offer a minimally invasive and adaptive solution for exploring and preserving our oceans. Unfortunately, current soft robotic swimmers are vastly inferior to evolved biological swimmers, especially in terms of controllability, efficiency, maneuverability, and longevity. Additionally, the tedious iterative fabrication and empirical testing required to design soft robots have hindered their optimization. In this work, we tackle this challenge by providing an efficient and straightforward pipeline for designing and fabricating soft robotic swimmers equipped with electrostatic actuation. We streamline the process to allow for rapid additive manufacturing, and we show how a differentiable simulation can be used to match a simplified model to the real deformation of a robotic swimmer. We perform several experiments with the fabricated swimmer by varying the voltage and actuation frequency of the swimmer's antagonistic muscles. We show how the voltage and frequency change the swimmer's locomotion speed while moving in liquid oil, and we observe a clear optimum in forward swimming speed. The differentiable simulation model we propose has various downstream applications, such as control and shape optimization of the swimmer; through our sim-to-real matching, optimization results can be mapped directly back to the real robot.
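A minimal sketch of such sim-to-real matching, with a toy stand-in for the simulator, parameters, and measurements (none of which come from the paper): gradient-descend on model parameters until the simulated deformation matches the observed one:

```python
# Sim-to-real matching with a differentiable "simulator": autodiff through
# a toy deformation model and fit its parameters to observed data.
import jax
import jax.numpy as jnp

def simulate(params, voltage):
    """Toy differentiable model: deformation saturates with applied voltage."""
    stiffness, gain = params
    return gain * jnp.tanh(voltage / stiffness)

observed = jnp.array([0.10, 0.19, 0.26])  # made-up "measured" deformations
voltages = jnp.array([1.0, 2.0, 3.0])

def loss(params):
    return jnp.mean((simulate(params, voltages) - observed) ** 2)

params = jnp.array([1.0, 1.0])
for _ in range(500):
    params = params - 0.1 * jax.grad(loss)(params)  # plain gradient descent
print(params, loss(params))
```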
We construct a physically parameterized probabilistic autoencoder (PAE) to learn the intrinsic diversity of Type Ia supernovae (SNe Ia) from a sparse set of spectral time series. The PAE is a two-stage generative model, consisting of an autoencoder (AE) that is interpreted probabilistically after training with a normalizing flow (NF). We demonstrate that the PAE learns a low-dimensional latent space that captures the nonlinear range of features present within the population, and that it can accurately model the spectral evolution of SNe Ia across the full range of wavelengths and observation times directly from the data. By introducing a correlation penalty term and a multi-stage training setup alongside our physically parameterized network, we show that intrinsic and extrinsic modes of variability can be separated during training, removing the need for additional models to perform standardization. We then use the PAE in a number of downstream tasks on SNe Ia for increasingly precise cosmological analyses, including the automatic detection of SN outliers, the generation of samples consistent with the data distribution, and solving the inverse problem in the presence of noisy and incomplete data to constrain cosmological distance measurements. We find that the optimal number of intrinsic model parameters appears to be three, in line with previous studies, and show that we can standardize our test sample of SNe Ia to $0.091 \pm 0.010$ mag, which corresponds to $0.074 \pm 0.010$ mag if peculiar velocity contributions are removed. Trained models and code are released at \href{https://github.com/georgestein/supaernova}{github.com/georgestein/supaernova}.
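A compact two-stage sketch of the PAE structure (not the released code), with a Gaussian mixture standing in for the paper's normalizing flow and synthetic data in place of the spectral time series:

```python
# Stage 1: train an autoencoder. Stage 2: fit a density model on its
# latents, then sample latents and decode them to generate new objects.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

X = torch.randn(512, 50)  # toy "spectra": 512 objects x 50 wavelength bins

class AE(nn.Module):
    def __init__(self, dim=50, latent=3):  # three intrinsic parameters
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))
    def forward(self, x):
        return self.dec(self.enc(x))

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                        # stage 1: reconstruction training
    opt.zero_grad()
    loss = ((ae(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

z = ae.enc(X).detach().numpy()
density = GaussianMixture(n_components=5).fit(z)   # stage 2: latent density
new_z = torch.tensor(density.sample(10)[0], dtype=torch.float32)
samples = ae.dec(new_z)                    # generated toy "spectra"
print(samples.shape)
```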
Modern vehicles rely on a fleet of electronic control units (ECUs) connected through the controller area network (CAN) bus for critical vehicular control. However, with the expansion of advanced connectivity features in automobiles and the elevated risk of exposure of internal systems, the CAN bus is increasingly vulnerable to intrusion and injection attacks. Ordinary injection attacks disrupt the typical timing properties of the CAN data stream, and rule-based intrusion detection systems (IDS) can easily detect them. However, advanced attackers can inject false data into the time-series sensory data (signals) while appearing innocuous in the pattern and frequency of CAN messages. Such attacks can bypass rule-based IDS, or any anomaly-based IDS built on binary payload data. To make vehicles robust against such intelligent attacks, we propose CANShield, a signal-based intrusion detection framework. CANShield consists of three modules: a data preprocessing module that handles the high-dimensional CAN data stream at the signal level and makes it suitable for deep learning models; a data analyzer module consisting of multiple deep autoencoder (AE) networks, each analyzing the time-series data from a different temporal perspective; and, finally, an attack detection module that uses an ensemble method to make the final decision. Evaluation results on two high-fidelity CAN signal attack datasets show CANShield's high accuracy and responsiveness in detecting advanced intrusion attacks.
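A minimal sketch of the detection idea, with a synthetic signal and a single AE standing in for the ensemble (not the CANShield implementation): train an autoencoder on benign multivariate signal windows, then flag windows whose reconstruction error exceeds a threshold learned from benign traffic:

```python
# Reconstruction-error anomaly detection on sliding windows of CAN signals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def windows(signals, w=16):
    """Slice a (time, n_signals) stream into flattened sliding windows."""
    return np.stack([signals[i:i + w].ravel() for i in range(len(signals) - w)])

# Benign traffic: a smooth periodic sensor signal plus small noise.
benign = np.sin(np.linspace(0, 50, 2000))[:, None] + 0.01 * rng.normal(size=(2000, 1))
Xb = windows(benign)
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(Xb, Xb)

# Threshold from the benign reconstruction-error distribution.
threshold = np.quantile(((ae.predict(Xb) - Xb) ** 2).mean(axis=1), 0.99)

attacked = benign.copy()
attacked[1000:1100] += 0.5                 # injected false sensor values
Xa = windows(attacked)
errors = ((ae.predict(Xa) - Xa) ** 2).mean(axis=1)
print("flagged windows:", int((errors > threshold).sum()))
```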